1 Introduction

Decisions under uncertainty place a decision-maker in the position of choosing between potentially competing options. Such decisions often have to be made without formal scientific support, and many do not need much formal support (Keeney, 2004). Decision-supporting research can offer formal support by prescribing a recommended course of action. In general, this is done under the assumption that decision-makers seek to maximize expected utility, i.e., that they will prefer the option with the greatest expected benefit according to their stated outcomes of interest. Decision-making processes can be categorized into two levels: 1) a decision aimed at influencing an outcome, where the expected choice is the one that maximizes the desired outcome given the information currently available to the decision-maker, and 2) a decision on how to allocate resources to reduce the uncertainty in the first decision problem, i.e., to gather information that improves the likelihood of choosing the option with the most beneficial outcome.

Such decisions are often made in policy settings such as healthcare, where, given a limited budget, a choice to fund the health of one part of society often comes at the cost of another (Claxton et al., 2015). This choice involves costs, benefits and risks in the form of possible damage and trade-offs in comparison to a baseline and/or other decision options.

Our own work has covered such situations regarding the nutritional costs of agricultural expansion in Uganda (C. W. Whitney et al., 2018), the choice between different siltation management options in Burkina Faso (Lanzanova et al., 2019), the management of competing water resources in Kenya (Luedeling et al., 2015), and deciding between agroforestry interventions in Vietnam (Do et al., 2020). In these cases, we apply, adopt and update the classical processes of Decision Analysis (Howard & Abbas, 2015; Jeffrey, 1990; Keeney, 1982). To achieve this, we co-generate impact pathways of decisions to provide the structure for a mathematical formula that is then used to estimate the expected utility. We perform a quantitative analysis (Luedeling et al., 2022) of welfare-based decision-making processes using Monte Carlo simulations. In our work, we interpret the welfare-based decision problem as a von Neumann-Morgenstern utility function and apply the ‘Value of Information Analysis’ to assign a value to a certain reduction in uncertainty or, equivalently, increase in information.
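To make this workflow concrete, the core calculation can be sketched in a few lines of R (with hypothetical payoff distributions that are not drawn from any of the studies above): Monte Carlo draws estimate the expected utility of each option, and the expected value of perfect information (EVPI) prices the remaining uncertainty as the gap between deciding with perfect foresight and committing to the best single option.

```r
# Hypothetical two-option decision (illustrative numbers only):
# Monte Carlo draws of uncertain net benefit for each option
set.seed(42)
n <- 10000
option_a <- rnorm(n, mean = 100, sd = 60)  # high-payoff, high-risk option
option_b <- rnorm(n, mean = 80,  sd = 10)  # safer, lower-payoff option

# Expected utility of each option; the rational choice maximizes this
eu <- c(A = mean(option_a), B = mean(option_b))
best_option <- names(which.max(eu))

# EVPI: expected gain if uncertainty were resolved before deciding
evpi <- mean(pmax(option_a, option_b)) - max(eu)
```

EVPI is non-negative by construction; a large value signals that investing resources in further measurement (the second-level decision above) may be worthwhile.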

There are a number of ways in which decisions made under uncertainty can be supported. Through this review, we seek to generate a comprehensive overview of applications of such processes that aim to support decision-making under uncertainty.

We offer an overview of the state-of-the-art methods applied to support decisions under uncertainty. We identify and evaluate the relevant research by extracting the decision-supporting methods and describing each method and its approach to supporting decisions. We present a protocol-driven comprehensive review and synthesis of these methods, summarize where they are currently applied, and offer some interpretation of where their application could be expanded.

2 Review and Synthesis

3 Protocol-driven approach to reviewing decision support methods

3.1 Methods

3.2 Literature collection

Protocol: We follow the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines (Figure 2). We gather data from the Web of Science Core Collection, a curated database of published, peer-reviewed research, and from Google Scholar, a larger and more diverse collection of published articles, preprints, theses, books, and other relevant scientific content. After deliberation and discussion amongst the authors, the final search query consisted of the following keywords.

Decision + Intervention OR policy + Uncertainty + Expert OR stakeholder + Model OR monte carlo OR simulation OR computer assisted + value of information OR information accuracy

“decision” + (“intervention” OR “policy”) + “uncertainty” + (“expert” OR “stakeholder”) + (“model” OR “monte carlo” OR “simulation” OR “Bayesian” OR “computer assisted”) + (“value of information” OR “information accuracy”)

From Web of Science, 14 records were collected in May 2023; from Google Scholar, more than 17,500 records were collected in April and May 2023. Associated papers and articles found through these records were also gathered as secondary data collection: some records shared arXiv or DOI links, and shared contributions and assorted affiliations led to new papers. We applied specific inclusion criteria to this large set of records to reduce the number covered in this study, and an independent screening of the collected records was performed to confirm their eligibility.

3.2.1 Inclusion criteria:

  1. Records must be in the English language,
  2. Scientific articles, reports and similar documents are included,
  3. Records must involve some form of decision-making or decision-supporting aspect,

3.2.2 Reasons for exclusion

  • Annual reports, syllabus, catalogs of studies, unrelated legal documents, published bibliography collections, duplicates, course notes, preprints (where we already have the published journal article)
  • All non-scientific reports and case studies

3.3 Methodology extraction

We use Keeney’s representation of where formal decision support is required and applied as motivation and framework for this assessment: his judgment about how 10,000 decisions faced by numerous decision-makers are typically made, presented as a personal histogram @ref(#fig:01_keeney) (Keeney, 2004).

(#fig:01_keeney) Keeney’s judgment about how 10,000 decisions are typically made - his personal histogram of 10,000 decisions being faced by numerous decision makers
(#fig:02_prisma) Overview of the methods review process following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)

We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) @ref(fig:02_prisma).

We preserved our search string on searchRxiv (C. Whitney, 2023).

Google Scholar (GS) yielded 17,600 results; without the asterisk wildcards on decision* and model*, it yielded just 17,400. This is the expected direction: the ‘*’ widens rather than narrows the search, so the roughly 200 additional results presumably match variants such as ‘decisions’ or ‘models’ but not ‘decision’ or ‘model’. These should be kept.

‘Google Scholar advanced search’ is much less targeted (2.4 million hits) when configured with all of the words: decision intervention policy uncertainty model; with at least one of the words: “value of information”, “information accuracy”, “intervention”, “policy”, “model*”, “monte carlo”, “simulation”, “Bayesian”, “computer assisted”, “expert”, “stakeholder”; with the words occurring ‘anywhere in the article’.

After importing papers to Zotero we:

  • Ran a PDF search in Zotero to find missing PDFs, followed by a manual PDF search for ~300 records. We looked up web pages that report findings to locate the original publications: proceedings (20), papers (40), theses (30)
  • Removed spurious items that turned up in the results: annual reports (5+), syllabi / catalogs of studies (10+), unrelated legal documents (3+), published bibliography collections, duplicates (50+), course notes (2), and preprints where we already had the published journal article (5)
  • Added papers to our collection that appeared in bibliographies and syllabi but were not already in the search (40+); Google Scholar suggested these papers because they cite relevant work.

In looking for the PDFs and following up on theses, notes, syllabi, etc. in the Google Scholar results, we added additional associated papers and articles. Some records shared arXiv or DOI links, and shared contributions and assorted affiliations led to new papers. Sometimes the class notes and syllabi in the Scholar results appeared because they listed papers relevant to our search terms; when these papers were not already in our main collection, we added them.

Web of Science - 14 results

CABI - 13 results

Decision + Intervention OR policy + Uncertainty + Expert OR stakeholder + Model OR “monte carlo” OR simulation OR “computer assisted” + “value of information” OR “information accuracy”

4 Data Import and Preprocessing

4.1 Reading Bibliographic Data

We import bibliographic records using the bib2df package (Ottolinger, 2024), converting our BibTeX file into a structured data frame for systematic processing.

# Read the bib file
bib_data <- bib2df::bib2df("bib/23_Methods_Review_Holistic_Systems.bib")

Our BibTeX file contains n = 15,200 citations.

4.2 Data Cleaning and Feature Engineering

We prepare the data using dplyr (Wickham et al., 2023), for annotation tracking and text consolidation.

# Basic cleaning and text preparation
clean_bib_data <- bib_data %>%
  dplyr::mutate(
    has_annotation = !is.na(ANNOTE) & ANNOTE != "",
    annotation_text = ifelse(has_annotation, ANNOTE, ""),
    full_text = paste(TITLE, ANNOTE, sep = " "),
    year = as.numeric(YEAR)
  )

3,499 of the citations have annotations.

4.3 Method Term Discovery

We search the bibliographic record for methodological terms from titles, abstracts, keywords, and annotations.

library(tidytext)

# Combine all text fields for comprehensive analysis
all_text_data <- clean_bib_data %>%
  mutate(
    combined_text = paste(
      TITLE, 
      ifelse(!is.na(ABSTRACT), ABSTRACT, ""),
      ifelse(!is.na(KEYWORDS), KEYWORDS, ""),
      ifelse(!is.na(ANNOTE), ANNOTE, ""),
      sep = " "
    )
  )

# Tokenize all text fields to discover method terms
method_terms_comprehensive <- all_text_data %>%
  select(combined_text) %>%
  unnest_tokens(word, combined_text) %>%
  count(word, sort = TRUE) %>%
  filter(n > 5, nchar(word) > 3) %>%  # Filter for meaningful terms
  anti_join(stop_words)  # Remove common stop words
(#tab:display_method_terms)Top 30 Most Frequent Terms Across All Text Fields
Term Frequency
information 6344
decision 3187
management 2912
analysis 2572
research 2373
data 2186
study 2166
model 2159
based 2123
risk 2033
supply 1871
date 1778
export 1755
cited 1742
2024 1731
publisher 1638
health 1551
chain 1525
systems 1483
cost 1456
approach 1431
uncertainty 1429
performance 1281
system 1227
bayesian 1191
economic 1184
technology 1153
knowledge 1113
review 1054
social 1037
# Analyze multi-word method terms using n-grams
method_bigrams_comprehensive <- all_text_data %>%
  select(combined_text) %>%
  unnest_tokens(bigram, combined_text, token = "ngrams", n = 2) %>%
  count(bigram, sort = TRUE) %>%
  filter(n > 2)  # Keep only terms appearing multiple times
(#tab:display_bigrams)Top 20 Most Frequent Bigrams (Two-Word Terms)
Bigram Frequency
of the 4200
in the 3119
cited by 1733
05 may 1730
2024 cited 1730
date 05 1730
export date 1730
may 2024 1730
on the 1586
and the 1426
supply chain 1419
of information 1399
decision making 1294
to the 1225
value of 1202
for the 1123
of a 897
based on 779
in a 755
this paper 727
library(stringr)  # for str_detect()

# Use pattern matching to find potential method terms in context
method_patterns <- c(
  "analysis", "model", "method", "approach", "framework", 
  "technique", "algorithm", "estimation", "evaluation", "simulation",
  "optimization", "bayesian", "statistical", "stochastic"
)

# Extract sentences containing methodological terms
method_sentences <- all_text_data %>%
  select(combined_text) %>%
  unnest_tokens(sentence, combined_text, token = "sentences") %>%
  filter(str_detect(tolower(sentence), paste(method_patterns, collapse = "|"))) %>%
  sample_n(10)  # Sample for review

4.4 Sample Sentences Containing Methodological Terms

Example_Sentences
the model predicts that individuals acquire more information when the stakes are higher.
an evaluation of the measurement properties within an analysis of covariance structures framework indicated that the operational measures developed here largely satisfy the criteria for unidimensionality, convergent, discriminant, and predictive validity.
conventional methods like grid search and random search can be computationally demanding.
development of a new cost-effectiveness model this chapter provides a detailed description of the modelling approach, the estimation of input parameters used to populate the model and the key assumptions underpinning the cost-effectiveness results.
multi-attribute identifiability analysis also yielded optimal parameter values that were temporally less variable than those estimated using streamflow alone.
the role of value for money in public insurance coverage decisions for drugs in {australia}: a retrospective analysis 1994-2004
we show that this model can be systematically approximated by a single-location inventory problem.
the development of the {economic} impacts of {smoking} {in} {pregnancy} ({esip}) model for measuring the impacts of smoking and smoking cessation during pregnancy
embedding security messages in existing processes: {a} pragmatic and effective approach to information security culture change.
the incorporation of mechanism improves the confidence in predictions made for a variety of conditions, while the statistical methods provide an empirical basis for parameter estimation and allow for estimates of predictive uncertainty.
# Specifically analyze author-provided keywords
if("KEYWORDS" %in% names(clean_bib_data)) {
  keyword_analysis <- clean_bib_data %>%
    filter(!is.na(KEYWORDS)) %>%
    select(KEYWORDS) %>%
    unnest_tokens(keyword, KEYWORDS, token = "regex", pattern = ";|,") %>%
    mutate(keyword = str_trim(tolower(keyword))) %>%
    count(keyword, sort = TRUE) %>%
    filter(n > 1, nchar(keyword) > 2)
}
(#tab:display_keywords)Top 20 Author-Provided Keywords
Keyword Frequency
bayesian 550
read cw 195
monte carlo simulation 155
bin 139
bayesian decision analysis 66
stanford 46
supply chain management 36
value of information 33
machine learning 17
information technology 14
risk management 14
supply chain 14
evpi 13
resilience 13
social science / sociology / general 11
trust 11
risk assessment 10
social media 10
optimization 9
review 9
# method categories
method_categories <- list(
  bayesian = c("bayesian", "bayes", "mcmc", "markov chain", "prior", "posterior"),
  simulation = c("simulation", "monte carlo", "stochastic", "agent-based", "discrete event"),
  optimization = c("optimization", "linear programming", "nonlinear programming", "heuristic"),
  statistical = c("regression", "anova", "time series", "survival analysis", "mixed model"),
  decision_analysis = c("decision analysis", "decision tree", "markov model", "value of information", "voi"),
  machine_learning = c("machine learning", "neural network", "random forest", "svm", "clustering"),
  multi_criteria = c("multi-criteria", "multi criteria", "analytic hierarchy", "ahp", "topsis"),
  economic_evaluation = c("cost-effectiveness", "cost-benefit", "cost-utility", "economic evaluation"),
  risk_analysis = c("risk analysis", "risk assessment", "sensitivity analysis", "uncertainty analysis")
)

4.5 Automated Method Detection

We apply automated classification using a custom dictionary-based algorithm (detect_methods_enhanced.R) that identifies methodological approaches from text content. The purrr package (Wickham & Henry, 2025) enables efficient vectorized processing across all records. Our dual detection (full text and titles only) enables confidence scoring, where methods appearing in titles receive higher reliability weights for subsequent analysis.

source("R/detect_methods_enhanced.R")
# Apply method detection
clean_bib_data <- clean_bib_data %>%
  mutate(
    detected_methods = map_chr(full_text, detect_methods_enhanced),
    methods_from_title = map_chr(TITLE, detect_methods_enhanced)
  )
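The sourced script is not reproduced here. As a rough sketch of what such a dictionary-based detector might look like (an assumption about its internals, reusing the method_categories dictionary defined above; the real implementation in R/detect_methods_enhanced.R may differ), the detection can be as simple as:

```r
# Simplified sketch of a dictionary-based method detector: returns the
# names of all categories whose terms occur in the text, joined by "; "
detect_methods_sketch <- function(text, categories = method_categories) {
  if (is.na(text) || text == "") return("")
  text <- tolower(text)
  hits <- names(categories)[vapply(
    categories,
    function(terms) any(vapply(terms, grepl, logical(1), x = text, fixed = TRUE)),
    logical(1)
  )]
  paste(hits, collapse = "; ")
}
```

Applying this element-wise with purrr::map_chr(), as above, yields one semicolon-separated method string per record.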

4.6 Confidence Scoring

We assign confidence levels to method classifications based on detection location. Methods identified in titles receive higher confidence than those found only in annotations, enabling quality-weighted analysis.

source("R/add_confidence_scores.R")
# Apply confidence scoring
clean_bib_data <- add_confidence_scores(clean_bib_data)
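As an illustration of this scoring rule (a simplified assumption; the actual logic lives in R/add_confidence_scores.R and also distinguishes a "low" tier), the title-versus-text weighting can be expressed as:

```r
# Sketch of a confidence-scoring rule: methods confirmed in the title
# are rated "high", methods found only in the wider text "medium",
# and records with no detected methods get an empty rating
add_confidence_sketch <- function(df) {
  df$method_confidence <- ifelse(
    df$detected_methods == "", "",
    ifelse(df$methods_from_title != "", "high", "medium")
  )
  df
}
```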

4.7 Method Detection Validation

We validate the automated classification by reporting key metrics including detection rates and confidence level distribution.

(#tab:method_detection_check_clean)Method Detection Summary Statistics
Metric Count
Total papers 15200
Papers with methods detected 1828
Papers with VOI 263
High confidence classifications 1823
Medium confidence classifications 5
Low confidence classifications 0

4.8 Classification Examples

We display representative examples of automated method classifications to illustrate detection accuracy and confidence assignment.

# Show some examples
sample_results <- clean_bib_data %>%
  filter(detected_methods != "") %>%
  select(TITLE, detected_methods, method_confidence) %>%
  head(10)

print(sample_results)
## # A tibble: 10 × 3
##    TITLE                                      detected_methods method_confidence
##    <chr>                                      <chr>            <chr>            
##  1 A hierarchical bayesian approach for inco… bayesian         high             
##  2 Integrating monitoring and optimization m… optimization     high             
##  3 A systematic review and economic evaluati… economic_evalua… high             
##  4 The rocky road to extended simulation fra… simulation; opt… high             
##  5 A practical guide to value of information… decision_analys… high             
##  6 Late pregnancy ultrasound to screen for a… decision_analys… high             
##  7 Efficient research design: {Using} value-… economic_evalua… high             
##  8 Economic evaluation of transperineal vers… economic_evalua… high             
##  9 The cost-effectiveness of a novel {SIAsco… economic_evalua… high             
## 10 Bayesian design and analysis of external … bayesian         high

4.9 Method Frequency Analysis

We quantify and visualize the distribution of methodological approaches across the literature using frequency analysis and bar chart visualization via ggplot2 (Wickham et al., 2025).

source("R/analyze_method_frequency.R")

method_results <- analyze_method_frequency(clean_bib_data)
print(method_results$plot)

(#fig:method_results_plot) Frequency of detected method categories across the reviewed literature

print(method_results$counts)
## # A tibble: 9 × 2
##   detected_methods        n
##   <chr>               <int>
## 1 economic_evaluation   433
## 2 decision_analysis     396
## 3 bayesian              380
## 4 simulation            238
## 5 risk_analysis         209
## 6 optimization          169
## 7 multi_criteria         77
## 8 machine_learning       64
## 9 statistical            22
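The counting step inside analyze_method_frequency can be approximated in base R (a sketch, since the sourced function also builds the ggplot2 bar chart and uses dplyr/tidyr internally):

```r
# Sketch of the tabulation inside analyze_method_frequency(): split
# multi-method strings ("simulation; optimization") into individual
# methods, then count occurrences of each category
count_methods_sketch <- function(detected) {
  detected <- detected[detected != ""]
  methods <- unlist(strsplit(detected, "; ", fixed = TRUE))
  sort(table(methods), decreasing = TRUE)
}
```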

The method_results counts reveal the dominant methodological paradigms and identify less common approaches.

4.10 Value of Information Analysis

We conduct specialized analysis of Value of Information (VOI) literature, examining methodological patterns and temporal trends.

source("R/analyze_voi_papers.R")
# Analyze VOI papers
voi_results <- analyze_voi_papers(clean_bib_data)

# Access results
voi_results$summary
## [1] "VOI papers: 263"
knitr::kable(voi_results$methods, caption = "Methods in VOI Papers")
(#tab:voi_results_methods)Methods in VOI Papers
detected_methods n
decision_analysis 263
economic_evaluation 20
bayesian 5
simulation 4
optimization 2
risk_analysis 2
voi_results$plot

(#fig:voi_results_plot) Methods detected within the Value of Information (VOI) subset of papers

The analyze_voi_papers results show the methodological evolution and application contexts specific to VOI.

4.11 Results Export

We compile all classification results into a structured results_table and export for external validation and secondary analysis.

# Create a comprehensive results table
results_table <- clean_bib_data %>%
  select(
    TITLE, 
    YEAR = year,
    HAS_ANNOTATION = has_annotation,
    DETECTED_METHODS = detected_methods, 
    CONFIDENCE = method_confidence,
    VOI = has_voi,
    BAYESIAN = has_bayesian,
    SIMULATION = has_simulation,
    UNCERTAINTY = has_uncertainty
  )

# Export for manual verification
write.csv(results_table, "data/method_detection_results.csv", row.names = FALSE)
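The boolean flags used above (has_voi, has_bayesian, has_simulation, has_uncertainty) are created inside the sourced helper scripts; a plausible base-R sketch of how such flags could be derived from the consolidated text is (the actual patterns may differ):

```r
# Plausible sketch of topic-flag derivation: pattern-match the
# consolidated text field for each topic of interest
add_topic_flags <- function(df) {
  txt <- tolower(df$full_text)
  df$has_voi         <- grepl("value of information|\\bvoi\\b", txt)
  df$has_bayesian    <- grepl("bayes", txt)
  df$has_simulation  <- grepl("simulation|monte carlo", txt)
  df$has_uncertainty <- grepl("uncertain", txt)
  df
}
```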

4.12 Analysis Summary

We provide a summary of the papers using summarise from dplyr (Wickham et al., 2023).

# Create summary report
summary_stats <- clean_bib_data %>%
  summarise(
    total_papers = n(),
    papers_with_methods = sum(detected_methods != ""),
    high_confidence = sum(method_confidence == "high")
  )

knitr::kable(summary_stats, caption = "Analysis Summary")
(#tab:summary_report)Analysis Summary
total_papers papers_with_methods high_confidence
15200 1828 1823

4.13 Dominant Methodological Approaches

We identify prevalent methodological approaches using frequency analysis and tidyr (Wickham et al., 2024) for data restructuring.

# Show top methods
top_methods <- clean_bib_data %>%
  filter(detected_methods != "") %>%
  separate_rows(detected_methods, sep = "; ") %>%
  count(detected_methods, sort = TRUE) %>%
  head(10)

print(top_methods)
## # A tibble: 9 × 2
##   detected_methods        n
##   <chr>               <int>
## 1 economic_evaluation   433
## 2 decision_analysis     396
## 3 bayesian              380
## 4 simulation            238
## 5 risk_analysis         209
## 6 optimization          169
## 7 multi_criteria         77
## 8 machine_learning       64
## 9 statistical            22

The top_methods table reveals the core components of the methodological ecosystem, highlighting established practices and potential gaps in the literature on decision support under uncertainty.

4.14 VOI Method Validation

We examine Value of Information (VOI) papers specifically to assess classification plausibility and confidence distribution within this focused methodological domain.

# Quick validation - check if our detection makes sense
validation_check <- clean_bib_data %>%
  filter(has_voi) %>%
  select(TITLE, detected_methods, method_confidence) %>%
  arrange(desc(method_confidence))

print(validation_check)
## # A tibble: 263 × 3
##    TITLE                                      detected_methods method_confidence
##    <chr>                                      <chr>            <chr>            
##  1 A practical guide to value of information… decision_analys… high             
##  2 Late pregnancy ultrasound to screen for a… decision_analys… high             
##  3 A cost-effectiveness and value of informa… decision_analys… high             
##  4 Expected value of information analysis to… decision_analys… high             
##  5 Are head-to-head trials of biologics need… decision_analys… high             
##  6 Efficient value of information computation decision_analys… high             
##  7 Supplier quality improvement: {The} value… decision_analys… high             
##  8 Use of value of information in {UK} healt… decision_analys… high             
##  9 Decisions on further research for predict… decision_analys… high             
## 10 Towards social learning in water related … decision_analys… high             
## # ℹ 253 more rows

The validation_check output lets us verify that the classifications within this focused domain are plausible.

4.15 High-Reliability Classifications

We examine the subset of classifications receiving the highest confidence rating, where methods were detected in both titles and full text.

# Check papers with high confidence
high_confidence <- clean_bib_data %>%
  filter(method_confidence == "high") %>%
  select(TITLE, detected_methods, methods_from_title)

print(high_confidence)
## # A tibble: 1,823 × 3
##    TITLE                                     detected_methods methods_from_title
##    <chr>                                     <chr>            <chr>             
##  1 A hierarchical bayesian approach for inc… bayesian         bayesian          
##  2 Integrating monitoring and optimization … optimization     optimization      
##  3 A systematic review and economic evaluat… economic_evalua… economic_evaluati…
##  4 The rocky road to extended simulation fr… simulation; opt… simulation; optim…
##  5 A practical guide to value of informatio… decision_analys… decision_analysis 
##  6 Late pregnancy ultrasound to screen for … decision_analys… decision_analysis…
##  7 Efficient research design: {Using} value… economic_evalua… economic_evaluati…
##  8 Economic evaluation of transperineal ver… economic_evalua… economic_evaluati…
##  9 The cost-effectiveness of a novel {SIAsc… economic_evalua… economic_evaluati…
## 10 Bayesian design and analysis of external… bayesian         bayesian          
## # ℹ 1,813 more rows

4.16 Interactive Reference Browser

We provide an interactive reference browser using the DT package (Xie et al., 2025) for easy navigation of the analyzed literature collection.

# Minimal interactive table
DT::datatable(
  clean_bib_data %>%
    select(
      Author = AUTHOR,
      Year = YEAR,
      Title = TITLE
    ) %>%
    arrange(desc(Year), Author),
  options = list(
    pageLength = 5,
    lengthMenu = c(5, 10, 20),
    dom = 'tip'  # table, info, pagination only
  ),
  caption = "Reference List",
  rownames = FALSE,
  filter = "none"  # remove filter to save space
)

(#fig:datatable_clean_bib_data) Interactive reference browser of the analyzed literature collection

References

Claxton, K., Martin, S., Soares, M., Rice, N., Spackman, E., Hinde, S., Devlin, N., Smith, P. C., & Sculpher, M. (2015). Methods for the estimation of the National Institute for Health and Care Excellence cost-effectiveness threshold. Health Technology Assessment, 19(14), 1–504. https://doi.org/10.3310/hta19140
Do, H., Luedeling, E., & Whitney, C. (2020). Decision analysis of agroforestry options reveals adoption risks for resource-poor farmers. Agronomy for Sustainable Development, 40, 1–12.
Howard, R. A., & Abbas, A. E. (2015). Foundations of decision analysis. Prentice Hall.
Jeffrey, R. C. (1990). The logic of decision (2nd ed.). University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/L/bo3640589.html
Keeney, R. L. (1982). Decision analysis: An overview. Operations Research, 30(5), 803–838. https://doi.org/10.1287/opre.30.5.803
Keeney, R. L. (2004). Making better decision makers. Decision Analysis, 1(4), 193–204. https://doi.org/10.1287/deca.1040.0009
Lanzanova, D., Whitney, C., Shepherd, K., & Luedeling, E. (2019). Improving development efficiency through decision analysis: Reservoir protection in Burkina Faso. Environmental Modelling & Software, 115, 164–175. https://doi.org/10.1016/j.envsoft.2019.01.016
Luedeling, E., Goehring, L., Schiffers, K., Whitney, C., & Fernandes, E. (2022). decisionSupport: Quantitative support of decision making under uncertainty. https://CRAN.R-project.org/package=decisionSupport
Luedeling, E., Oord, A. L., Kiteme, B., Ogalleh, S., Malesu, M., Shepherd, K. D., & De Leeuw, J. (2015). Fresh groundwater for Wajir: Ex-ante assessment of uncertain benefits for multiple stakeholders in a water supply project in northern Kenya. Frontiers in Environmental Science, 3. https://doi.org/10.3389/fenvs.2015.00016
Ottolinger, P. (2024). bib2df: Parse a BibTeX file to a data frame. https://docs.ropensci.org/bib2df/
Whitney, C. (2023). Review of methods for supporting decisions under uncertainty. searchRxiv, 2023, 20230007470. https://doi.org/10.1079/searchRxiv.2023.00096
Whitney, C. W., Lanzanova, D., Muchiri, C., Shepherd, K. D., Rosenstock, T. S., Krawinkel, M., Tabuti, J. R. S., & Luedeling, E. (2018). Probabilistic decision tools for determining impacts of agricultural development policy on household nutrition. Earth’s Future, 6(3), 359–372. https://doi.org/10.1002/2017EF000765
Wickham, H., Chang, W., Henry, L., Pedersen, T. L., Takahashi, K., Wilke, C., Woo, K., Yutani, H., Dunnington, D., & van den Brand, T. (2025). ggplot2: Create elegant data visualisations using the grammar of graphics. https://ggplot2.tidyverse.org
Wickham, H., François, R., Henry, L., Müller, K., & Vaughan, D. (2023). Dplyr: A grammar of data manipulation. https://dplyr.tidyverse.org
Wickham, H., & Henry, L. (2025). Purrr: Functional programming tools. https://purrr.tidyverse.org/
Wickham, H., Vaughan, D., & Girlich, M. (2024). Tidyr: Tidy messy data. https://tidyr.tidyverse.org
Xie, Y., Cheng, J., Tan, X., & Aden-Buie, G. (2025). DT: A wrapper of the JavaScript library DataTables. https://github.com/rstudio/DT